Learning Versatile Filters for Efficient Convolutional Neural Networks
This paper introduces versatile filters for constructing efficient convolutional neural networks. Considering the demand for efficient deep learning techniques running on cost-effective hardware, a number of methods have been developed to learn compact neural networks. Most of these works aim to slim down filters in different ways, e.g., by investigating small, sparse, or binarized filters. In contrast, we treat filters from an additive perspective: a series of secondary filters can be derived from a primary filter. These secondary filters all inherit weights from the primary filter without occupying more storage, but once unfolded in computation they can significantly enhance the capability of the filter by integrating information extracted from different receptive fields. Besides spatial versatile filters, we additionally investigate versatile filters from the channel perspective. The new techniques are general and can upgrade filters in existing CNNs. Experimental results on benchmark datasets and neural networks demonstrate that CNNs constructed with our versatile filters achieve accuracy comparable to that of the original filters, while requiring less memory and fewer FLOPs.
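The derivation of secondary filters from a primary filter can be sketched as follows. This is an illustration only, not the authors' implementation: we assume, following the abstract's description of spatial versatile filters, that each secondary filter keeps a nested central region of the primary filter (a smaller receptive field) and zeroes the outer rings, so all secondary filters share the primary filter's weights and add no storage. The function name `secondary_filters` is ours.

```python
import numpy as np

def secondary_filters(primary):
    """Derive nested secondary filters from a d x d primary filter.

    Hypothetical sketch: the s-th secondary filter masks out the
    outermost s rings of the primary filter, keeping only the central
    (d - 2s) x (d - 2s) weights. All secondary filters thus reuse the
    primary filter's weights (no extra storage), while corresponding
    to progressively smaller receptive fields.
    """
    d = primary.shape[0]
    filters = [primary.copy()]  # the primary filter itself
    s = 1
    while d - 2 * s >= 1:
        mask = np.zeros_like(primary)
        mask[s:d - s, s:d - s] = 1.0  # keep the central region only
        filters.append(primary * mask)
        s += 1
    return filters

# A 5x5 primary filter yields three filters with effective
# spatial sizes 5x5, 3x3, and 1x1.
primary = np.arange(1, 26, dtype=np.float64).reshape(5, 5)
for f in secondary_filters(primary):
    print(int(np.count_nonzero(f)))
```

In a convolutional layer, each of these filters would produce its own feature map from the same sliding window, so a single stored filter contributes several output channels that mix information from different receptive fields.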
Reviews: Learning Versatile Filters for Efficient Convolutional Neural Networks
The paper introduces two new types of convolutional filters, named versatile filters, which can reduce the memory and FLOP requirements of convolutions. The method is simple and, according to the experimental results, it appears to be effective. The text quality is acceptable, although it would definitely benefit from clearer explanations of the proposed method. For instance, Figure 1 is a bit confusing (are you showing 4 different filters in Fig. 1(b)?). My main concerns about this paper relate to the experiments and results, as detailed in the following questions: (1) Regarding the FLOP reduction, it is not clear how the reduction in the number of computations is actually achieved.
Wang, Yunhe; Xu, Chang; Xu, Chunjing; Xu, Chao; Tao, Dacheng